Batched Sparse Codes
Network coding can significantly improve the transmission rate of
communication networks with packet loss compared with routing. However, using
network coding usually incurs high computational and storage costs in the
network devices and terminals. For example, some network coding schemes require
the computational and/or storage capacities of an intermediate network node to
increase linearly with the number of packets for transmission, making such
schemes difficult to implement in a router-like device that has only
constant computational and storage capacities. In this paper, we introduce
BATched Sparse code (BATS code), which enables a digital fountain approach to
resolve the above issue. BATS code is a coding scheme that consists of an outer
code and an inner code. The outer code is a matrix generalization of a fountain
code. It works with the inner code, which comprises random linear coding at the
intermediate network nodes. BATS codes preserve such desirable properties of
fountain codes as ratelessness and low encoding/decoding complexity. The
computational and storage capacities of the intermediate network nodes required
for applying BATS codes are independent of the number of packets for
transmission. Almost capacity-achieving BATS code schemes are devised for
unicast networks, two-way relay networks, tree networks, a class of three-layer
networks, and the butterfly network. For general networks, under different
optimization criteria, guaranteed decoding rates for the receiving nodes can be
obtained.
Comment: 51 pages, 12 figures, submitted to IEEE Transactions on Information Theory.
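The two-layer structure described above can be sketched in a few lines. The following is a minimal illustration, not the paper's construction: all names are hypothetical, and it works over GF(2) (where addition is XOR) for simplicity, whereas practical BATS codes typically use a larger field such as GF(256). The outer code emits a batch of random linear combinations of a small subset of source packets; the inner code recodes within a single batch at an intermediate node, so the node's buffer is the batch size, independent of the total number of packets.

```python
import random

GF2 = 2  # toy field GF(2); addition is XOR

def outer_encode(packets, batch_size, degree, rng):
    """Outer code: pick a sparse subset of `degree` source packets and emit
    `batch_size` random linear combinations of them (a batch)."""
    chosen = rng.sample(range(len(packets)), degree)
    batch = []
    for _ in range(batch_size):
        pkt = 0
        for i in chosen:
            if rng.randrange(GF2):
                pkt ^= packets[i]      # GF(2) addition is XOR
        batch.append(pkt)
    return chosen, batch

def recode(batch, n_out, rng):
    """Inner code: an intermediate node forwards random linear combinations
    of packets from the SAME batch only, so its storage is the batch size,
    independent of the total number of source packets."""
    out = []
    for _ in range(n_out):
        pkt = 0
        for p in batch:
            if rng.randrange(GF2):
                pkt ^= p
        out.append(pkt)
    return out

rng = random.Random(0)
packets = [rng.getrandbits(32) for _ in range(64)]   # 64 source packets
chosen, batch = outer_encode(packets, batch_size=8, degree=4, rng=rng)
forwarded = recode(batch, n_out=8, rng=rng)
# Every forwarded packet remains a linear combination of the 4 chosen
# source packets, so the sink can decode batch by batch.
```

Because recoding never mixes packets across batches, each forwarded packet stays in the span of one batch's chosen source packets, which is what keeps the sink's batch-by-batch decoding tractable.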
The Explicit Coding Rate Region of Symmetric Multilevel Diversity Coding
It is well known that {\em superposition coding}, namely separately encoding
the independent sources, is optimal for symmetric multilevel diversity coding
(SMDC) (Yeung-Zhang 1999). However, the characterization of the coding rate
region therein involves uncountably many linear inequalities and the constant
term (i.e., the lower bound) in each inequality is given in terms of the
solution of a linear optimization problem. Thus this implicit characterization
of the coding rate region does not enable the determination of the
achievability of a given rate tuple. In this paper, we first obtain closed-form
expressions of these uncountably many inequalities. Then we identify a finite
subset of inequalities that is sufficient for characterizing the coding rate
region. This gives an explicit characterization of the coding rate region. We
further show by the symmetry of the problem that only a much smaller subset of
this finite set of inequalities needs to be verified in determining the
achievability of a given rate tuple. Yet, the cardinality of this smaller set
grows at least exponentially fast with the number of levels. We also present a
subset entropy inequality, which, together with our explicit characterization
of the coding rate region, is sufficient for proving the optimality of
superposition coding.
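Once the rate region is characterized by a finite, explicit set of linear inequalities, determining the achievability of a rate tuple reduces to a direct membership test. The checker below is a generic sketch of that test; the example inequalities are placeholders chosen for illustration and are not the paper's actual bounds.

```python
def achievable(rates, inequalities, tol=1e-9):
    """Check a rate tuple (R_1, ..., R_L) against a finite list of linear
    inequalities, each given as (coeffs, bound), meaning
    sum_i coeffs[i] * R_i >= bound."""
    return all(
        sum(c * r for c, r in zip(coeffs, rates)) >= bound - tol
        for coeffs, bound in inequalities
    )

# Hypothetical 2-encoder example with source entropies h1, h2.
# These inequalities are illustrative placeholders, NOT the SMDC region:
# each encoder alone must carry at least h1, and the two together h1 + h2.
h1, h2 = 1.0, 2.0
ineqs = [([1, 0], h1), ([0, 1], h1), ([1, 1], h1 + h2)]

print(achievable([1.5, 1.6], ineqs))  # True: all three bounds are met
print(achievable([0.5, 3.0], ineqs))  # False: R_1 < h1
```

The point of the paper's explicit characterization is precisely that such a finite loop suffices, whereas the implicit 1999 characterization would require solving a linear program per inequality over an uncountable family.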
On Linear Operator Channels over Finite Fields
Motivated by linear network coding, communication channels that perform linear
operations over finite fields, namely linear operator channels (LOCs), are
studied in this paper. For such a channel, its output vector is a linear
transform of its input vector, and the transformation matrix is randomly and
independently generated. The transformation matrix is assumed to remain
constant for every T input vectors and to be unknown to both the transmitter
and the receiver. There are NO constraints on the distribution of the
transformation matrix and the field size.
Specifically, the optimality of subspace coding over LOCs is investigated. A
lower bound on the maximum achievable rate of subspace coding is obtained and
it is shown to be tight for some cases. The maximum achievable rate of
constant-dimensional subspace coding is characterized and the loss of rate
incurred by using constant-dimensional subspace coding is insignificant.
The maximum achievable rate of channel training is close to the lower bound
on the maximum achievable rate of subspace coding. Two coding approaches based
on channel training are proposed and their performances are evaluated. Our
first approach makes use of rank-metric codes and its optimality depends on the
existence of maximum rank distance codes. Our second approach applies linear
coding and it can achieve the maximum achievable rate of channel training. Our
code designs require only the knowledge of the expectation of the rank of the
transformation matrix. The second scheme can also be realized ratelessly
without a priori knowledge of the channel statistics.
Comment: 53 pages, 3 figures, submitted to IEEE Transactions on Information Theory.
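The channel-training idea can be made concrete with a toy simulation over GF(2). This is a hedged sketch under simplifying assumptions, not the paper's scheme: the transformation matrix is taken square and invertible, and the transmitter spends the first n of the T channel uses on an identity header so the receiver observes the matrix directly and can invert it to recover the payload.

```python
import numpy as np

rng = np.random.default_rng(1)

def gf2_solve(H, Y):
    """Solve H X = Y over GF(2) by Gauss-Jordan elimination
    (H is assumed invertible in this toy example)."""
    n = H.shape[0]
    A = np.concatenate([H, Y], axis=1).astype(int) % 2
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r, col])
        A[[col, piv]] = A[[piv, col]]        # move a pivot into place
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]               # GF(2) row elimination
    return A[:, n:]

n, T = 4, 10   # vector length; matrix stays constant for T input vectors

# An invertible GF(2) matrix built as L @ U with unit diagonals.
Lm = np.tril(rng.integers(0, 2, (n, n)), -1) + np.eye(n, dtype=int)
U = np.triu(rng.integers(0, 2, (n, n)), 1) + np.eye(n, dtype=int)
H = (Lm @ U) % 2

# Channel training: prepend an identity header to the payload X.
X = rng.integers(0, 2, size=(n, T - n))
sent = np.concatenate([np.eye(n, dtype=int), X], axis=1)
recv = (H @ sent) % 2

H_hat = recv[:, :n]                  # the header reveals H exactly
X_hat = gf2_solve(H_hat, recv[:, n:])
assert (X_hat == X).all()
```

The header consumes n of the T channel uses, so the training overhead n/T vanishes as the coherence time T grows, which is consistent with the abstract's claim that channel training comes close to the subspace-coding lower bound.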
Secure Network Function Computation for Linear Functions -- Part I: Source Security
In this paper, we put forward secure network function computation over a
directed acyclic network. In such a network, a sink node is required to compute
with zero error a target function of which the inputs are generated as source
messages at multiple source nodes, while a wiretapper, who can access any one
but not more than one wiretap set in a given collection of wiretap sets, is not
allowed to obtain any information about a security function of the source
messages. The secure computing capacity for the above model is defined as the
maximum average number of times that the target function can be securely
computed with zero error at the sink node with the given collection of wiretap
sets and security function for one use of the network. The characterization of
this capacity is in general overwhelmingly difficult. In the current paper, we
consider securely computing linear functions with a wiretapper who can
eavesdrop any subset of edges up to a certain size r, referred to as the
security level, with the security function being the identity function. We
first prove an upper bound on the secure computing capacity, which is
applicable to arbitrary network topologies and arbitrary security levels. When
the security level r is equal to 0, our upper bound reduces to the computing
capacity without security consideration. We discover the surprising fact that
for some models, there is no penalty on the secure computing capacity compared
with the computing capacity without security consideration. We further obtain
an equivalent expression of the upper bound by using a graph-theoretic
approach, and accordingly we develop an efficient approach for computing this
bound. Furthermore, we present a construction of linear function-computing
secure network codes and obtain a lower bound on the secure computing capacity.
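The "no penalty" phenomenon has a simple flavor that can be shown with a toy one-time-pad arrangement. This is an illustration of the security requirement only, not the paper's code construction, and all names are hypothetical: two sources want a sink to compute their sum over GF(p) while a wiretapper of security level r = 1 (any single edge) learns nothing about the source messages.

```python
import random

p = 257                     # a prime; arithmetic is over GF(p)
rng = random.Random(7)

def transmit(m1, m2):
    """The two source nodes share a uniform key k (distributed offline).
    Edge 1 carries m1 + k and edge 2 carries m2 - k (mod p), so the
    symbol on any single edge is uniform and independent of (m1, m2)."""
    k = rng.randrange(p)
    e1 = (m1 + k) % p
    e2 = (m2 - k) % p
    return e1, e2

def sink(e1, e2):
    """The sink still computes the target function m1 + m2 exactly."""
    return (e1 + e2) % p

m1, m2 = 42, 99
e1, e2 = transmit(m1, m2)
assert sink(e1, e2) == (m1 + m2) % p
```

Each edge carries one field symbol, the same as without security, which mirrors the surprising fact noted above that for some models the secure computing capacity equals the capacity without any security consideration.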